perm filename ANALOG[RDG,DBL]11 blob sn#656170 filedate 1982-05-04 generic text, type C, neo UTF8
  Analogies, and things like that...

see also	NAIVE[the,rdg] for a beginning of a naive analogy treatise.
		EXAMPL[the,rdg] for body of examples
	Outline

I.  Introduction
  A. Motivation  - people use it all the time
  B. Overview of this paper
	- Purpose
	- Section by section outline

II. Research Programme
  A. Goals of the system
	Include Disclaimer
	- on type of algorithm; not psychological study or philosophical argument
  B. Programme steps
  C. Evaluation/Validation
  D. Details (time scale, domains, ...)

III. Naive analogy
  A. Properties
  B. Dimensions (types, ...)
  C. Examples

IV. Types of analogy
  A. Organization
  B. What they have in common
  C. Which this program(me) will cover

V. Comments on Analogy Systems
  A. Ubiquity - NL, ...
  B. Other AI can be viewed as limiting cases of this defn

VI. Goal behaviour of program
  A. I/O Pairs
  B. Two modules

VII. Actual program design

VIII. Conclusion
  A. Ignorance
  B. Future directions

<<<Appendices>>>

A. Glossary

B. Bibliography

C. Further Examples
	Abstract-ette
This thesis describes a general mechanism for generating and evaluating analogies,
together with the implementation of such a program.
This process is guided by an explicit corpus of heuristics,
which the user can adjust to produce analogies which fit his specifications.
Why am I doing this?
	(perhaps with different emphasis)

This work deals with the use of analogy during the second phase of
Knowledge Acquisition (using JSB's three-fold scheme).
This section supplies a brief motivation -- why I am looking at
analogy, why analogy as a teaching aid, and why it is being used
for Knowledge Acquisition -- i.e. to input facts into a growing
knowledge base.

Why analogy?  The primary reason is my interest in this area:  few processes
seem as ubiquitous, or essential, to intelligent thought as the ability 
to form and understand analogies.
As I mention in @Cite(NaiveAnalogy),
processes as (at first glance) divergent as understanding language, 
formulating new scientific theories, and appreciating music
all seem to require a non-trivial analogizing ability.

Despite the great interest in this phenomenon, from philosophers and
psychologists as well as AIers,
there seems no consensus on how this process operates, 
or even on what, exactly, it is.

Why the use of analogy as a teaching tool?
In @Cite(NaiveAnalogy) I list a number of uses of analogy -- ranging from
communication and representation to "discovery".  The one I consider most
tractable (and, as I'll explain soon, most testable) is explanation --
where the speaker has an idea to communicate to a competent, if less
knowledgeable, hearer.  As an (as yet unverified) research claim,
I feel that the other (more sophisticated?) uses of analogy will all
make heavy use of this "module" -- that is, this particular facility
embodies the "central" (core, primary?) use of analogy,
which the other processes can utilize and expand.

Why for a KA task?
I have two main reasons:
First, usefulness:
The central theme in Expert Systems today is KA -- this represents
the major bottleneck impeding the development of these systems.
Corollary 1 claims that, if only we had analogy, this task would
be so much easier...
Second, testability:
Hooking the results of an analogical derivation to an existing, running
expert system renders those results relatively easy to test -- the improved system
either gives meaningful results over this new (sub)domain, or it doesn't.
(This still leaves open a few issues -- like whether it cheated or not, and
whether it was really using analogy or not...)

What do I hope to accomplish?  First, some relevant, (scholarly?)
inter-disciplinary research on this <ubiquitous> issue of analogy --
one which sheds some light on this unfortunately sloppily pursued
area.  Second, a running program which may be usable by other researchers
for their projects.  Third, a clearer understanding, in my mind at least,
of some of the underlying processes which are going on in our heads
during "intelligent" activities, as manifested by our frequent and easy
use of this complex reasoning process, analogy.
	Overview of this document
To provide a grounding for the balance of this paper,
Section 2 will briefly sketch this overall programme.

Section 3 attempts to formalize what we mean by analogy.
It includes a taxonomy of "types of analogy",
as well as examples of each type of analogy in use.
This particular research task will address only a (proper) subset
of all "legal" analogies --
this subset will be given here.

Section 4 will use a brief literature search to
demonstrate the appropriateness, and scope,
of this model of analogy.
We will see that a large number of existing
analogizing processes (both AI and "natural")
are, in fact, but special cases of this approach.

Section 5 (finally) describes the goal program.
It begins with a sketch of the behaviour (i.e. input/output pairs)
for the target analogizing program.
We will then motivate why this, alone, is insufficient -- 
indeed, we make the claim that no single, unalterable analogizer 
can possibly be all things to all people.
Clearly the user must be permitted to input his own (inherently subjective)
criteria for ranking competing analogies,
or for generating apt analogies.
Each user should be able to modify this analogizer,
by simply entering his particular set of heuristics.
This task - of facilitating the input and incorporation of these new rules -
is performed by the second, major module of the running program --
the analogy-criteria refiner.
It is this modifiability which distinguishes this system from
most other (AI) analogy systems,
and adds respectability to the overall program(me).

The final section gives the real meat of the paper.
Here we will describe how we plan to actually build
a computer program capable of using analogies --
more precisely, how to design the code needed to
generate and use relevant analogies between a pair of models,
for the purpose of communicating.
Each of its two, almost-independent pieces 
(i.e. the actual analogizer and the analogy-criteria refiner)
will be described to some detail here.

The conclusion section will provide a running description of the current
"state of the system", together with updates, and other rethinkings done
by the author after the bulk of this paper has been "written in cement".

Three appendices will follow.
Appendix A is a (partial) glossary, providing our definitions of
various terms.
Appendix B is the bibliography, and Appendix C completes the corpus of examples,
begun in Section 3.

----

The real purpose of this report is to specify a full research programme.
The subsequent sections sketch some first ideas on analogies,
which this programme is designed to address.
This section serves as a preface, summarizing our overall goals and aims --
to present the whole picture,
in which to ground the "details" which follow.

We will first present the overall goals of this system -- including both
what we intend to prove, and which arenas we are intentionally NOT entering.
Next is an outline of the steps we will follow towards achieving these results.
The final part of this section gives some pertinent details -- things like
(our initial guess) at the timings for these steps,
what things we anticipate will be on the "critical path",
and some thoughts on the eventual implementation specifications --
such as representation language to use, or desired control regime.

A final warning --
because this programme specification appears so early in the paper,
it will, necessarily, include a great many "forward references" -- terms
which will not be well defined until some later section.
The interested reader, unwilling to suspend his curiosity (or unable
to make the necessary, if temporary, leap of faith),
is referred to the glossary provided in Appendix A to help fathom the
terms.
Otherwise s/he is encouraged to accept the terms as still-to-be-defined
entities, trusting that these definitions will indeed follow.

---

Our use of possibly ambiguous terms, such as Analogy itself,
is based on the definitions given in Appendix A.
They are designed to provide a formal basis for discussing
what WE mean by these terms,
and to indicate whether some mapping is, or is not, an analogy 
(by our definition of the term).
It is within this framework that we
will talk about things like "good analogies", or "apt
abstractions".
{ftnote: We will see in Section 5 that there is some evidence that people do
in fact use something like this process;
and that many of the common uses of analogy
in various philosophical literature can be viewed as a simple case
of the general, encompassing definitions we will give.
However, such evidence should be used only to confirm 
the utility of this basic approach -- ie that Mother Nature found it useful,
and that observers have noticed (and skirted around)
something like this basic mechanism.}

Justifications are important.
	III. Naive Analogy
<see NAIVE.MSS[the,rdg] -- this to be eliminated>

Before presenting the meat of this thesis proposal, we will first list
a small collection of examples of things which we consider viable instances
of analogy or of an analogizing process.  The breadth and variety
of these examples should point to the ubiquity of this phenomenon in
our day to day activities...
In the first few cases, we will present a "quick and dirty" version of
a problem-solving method for handling such analogies.

i) Literary Metaphors:
a) Matching of "outstanding features"
Q: How is Oscar Wilde like the ancient Greeks?  Both were gay.

Q: How is the captain of a ship like a bride (in her wedding dress)?
Both have difficulty in navigating, at least when the bride has a
long train following her.

The analogies here are determined by asking what special features the
two analogs possess, and then seeing which of these are shared.
Notice there is no a priori reason someone would associate sexual preference
with a society -- only in the case of the ancient Greeks is this a salient feature.

b) Direct Literary(sp) Metaphor

Here the match connects what people consider ordinary features, as opposed
to aspects which stand out.
Even within this category there is a large range of generality, 
and of explicitness.  Some metaphors reveal little about the implicit
connection -- the "all x are like y" cases -- while others provide
the necessary mapping, explicitly spelled out --
using something like "x is related to y because f(x) = f(y)".
	People are like birds.
	John is like a bird.
	John eats like a bird.
	John eats as much as a bird.
	Tom eats like a small, full bird.
	John eats as many sun-flower seeds as most birds eat.
	John ate as many sun-flower seeds on June 24 as Polly parrot ate that day.

This type of analogizing could be achieved by a relatively simple
property-list matcher.
This will work provided we assume facts like 
"standard substance & quantity consumed"
will automatically be included on the property lists of objects like people
and birds.
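A minimal sketch of such a property-list matcher, assuming property lists are stored as dictionaries; the objects and property names here (John, bird, "eats-substance") are illustrative assumptions, not part of the proposal:

```python
# Hypothetical sketch of the "relatively simple property-list matcher"
# described above: each analog carries a property list, and the analogy
# consists of whatever properties the two lists share.

def property_list_match(props_a, props_b):
    """Return the (name, value) pairs common to both property lists."""
    return {k: v for k, v in props_a.items() if props_b.get(k) == v}

# Illustrative property lists, assuming facts like "standard substance &
# quantity consumed" are automatically stored on every object.
john = {"eats-substance": "seeds", "eats-quantity": "small", "height": "5-6 feet"}
bird = {"eats-substance": "seeds", "eats-quantity": "small", "covering": "feathers"}

shared = property_list_match(john, bird)
# shared links John to birds through eating habits alone
```

The match succeeds only because the assumed vocabulary already lines up; that is exactly the burden the text places on the knowledge base.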

c) Indirect derived feature
Neither salient, nor immediately stored.
Eg American economy like shoe lace: both can ...

"Now is the winter of our discontent
Made glorious summer by this sun of York" [Richard III]
	- connotations of winter

"O, what a rogue and peasant slave am I!"
	- Hamlet isn't, but it is as if he was...
"Juliet is the sun, -- from the east --"
	- standard example [Winston] - where brightness (radiance and happiness)
	and position are played on.

`Solar metaphor' - explanation for myths - popular ca 1890-1910.
Here the connection is unusual, unless one happened to have that focus.
Of course, there are many other possible connections, as there were in the first
"People are like birds" case.

(This is the standard game, or challenge -- any pair of things can be connected,
sometimes by virtue only of name.)
(meta level)
The appropriate response is a groan, and a nod in agreement: "Yes, those two things
do share that property, but why mention that feature?"

Here most would consider the connection "contrived"
-- eg probably not explicitly stored,
but straightforward to deduce once the other analog is present.

The algorithm here must be more complex:  After looking at the salient features
of the objects, and then at all "primitive" attributes, one would be forced
to hunt, using the other analog as a basis for generating new properties.
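The staged search just described can be sketched as follows; the predicate names (`salient`, `stored`, `derivable`) are hypothetical stand-ins for whatever the knowledge base actually provides:

```python
# Sketch of the more complex algorithm above: try shared salient features
# first, then shared stored ("primitive") attributes, and only then hunt,
# using the other analog's features as candidates to derive for this one.

def find_connection(a, b, salient, stored, derivable):
    common = salient(a) & salient(b)          # 1. outstanding features
    if common:
        return "salient", common
    common = stored(a) & stored(b)            # 2. ordinary stored attributes
    if common:
        return "stored", common
    common = {f for f in salient(b) | stored(b) if derivable(a, f)}
    return "derived", common                  # 3. the expensive hunt

# Toy knowledge in the economy/shoe-lace style: the connection is neither
# salient nor stored, but deducible once the other analog suggests it.
salient = {"economy": {"inflation"}, "shoelace": {"ties-shoes"}}.get
stored = {"economy": {"large"}, "shoelace": {"thin"}}.get
derivable = lambda a, f: (a, f) in {("economy", "thin")}   # contrived link

stage, features = find_connection("economy", "shoelace", salient, stored, derivable)
```

The point of the staging is cost: generation of new properties is attempted only after the cheap lookups fail.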

d) Sometimes one has to probe even deeper into the nature of some phenomenon
to see the mapping:
	"Learning at CalTech is like trying to sip water from a fire hydrant"
In both cases one is trying to ingest (a small part of) something,
in a situation where a large quantity of that "substance" is being forced out.

	"Sword of Damocles"
-- itself used as a metaphor of imminent danger,
which is associated (unavoidably) with some power.  Extended to any
"tense", stress-producing situation.  (Or only certain other parts borrowed:
eg the danger which could have been avoided by ....   eg my desk having a point 
at small-of-back-high location...)

ii) Extended meaning of terms
a) Natural language
Many terms are stretched from original meaning, expanded to handle some new
situation.
Consider a term like "ancestry trees".
Note that biological trees have lent their name to this other
instance of a hierarchy.
Computer Scientists have also exploited this familiar example of a
well-founded ordering in describing trees, using terms like
root, leaves, branchiness, (no, not bark).
Or "he was feeling down", where we use up-down spatial terms
to describe the linear ordering relation of emotions.
The "time is money" metaphor is similar (analogous?) -- again we use the
vocabulary from one domain to describe another.  (See Lakoff & Johnson,
"Metaphors We Live By".)

CS members have borrowed and incorporated a wealth of terms -- many are now
sufficiently ingrained that it is difficult to remember their "etymology"
[Note almost all examples of things within `"'s are being used semi-metaphorically.]

Anatomy, Physiology, Diagnosis - once only for people, now for computers

or the other way (where the terms originally (well, most recently) concocted
to describe computer situations or operators are being applied, at least by
CS people, to people):
 channels, bus, swapped out, 

Note the Cognitive Scientists, who view both people and machines as "brother"
information processors, give some credence to these mappings.

b) Mathematics, etc.
This same phenomenon occurs in mathematics as well.  Consider the way
mathematicians extend familiar functions to apply to new domains:
The + operator was originally meaningful only over reals,
but now this same symbol,
and much of its associated semantics (eg its arity of 2, commutativity,
and often inverses and its distributive and definitional relation to "x")
have been borrowed by other structures - such as fields, rings, groups
(eg matrices, transfinite ordinals, sets, ...).
The "x" operator has been similarly extended.
There seems an underlying unity to all uses of a given symbol: for example
+ traditionally plays the role of a commutative operator over a set,
with an identity in that set.
Times, while often commutative, is not always -- eg matrices, or cross product.

Planning, Diagnosis

c) Sometimes it must be done explicitly - by actually describing
the connection: (where both terms given)
Bachelor bear - meaning extended.

d) The task might go some other way -- as in the familiar
"What are the x of y?" type of question,
where x is (well-)defined only in field z.
[How to "extend" x to the alien field y.]
Language : Phoneme :: Music : ?
? might be Interval (Halperin's conjecture), Timbre, or ...
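One way to sketch answering this "Language : Phoneme :: Music : ?" form: recover the relation linking the first pair, then search for terms standing in that same relation to the third.  The toy knowledge base below is an illustrative assumption (it simply encodes Halperin's conjecture as a stored fact):

```python
# Sketch of a proportional-analogy solver over a tiny relational KB.
# Each entry maps an ordered pair of terms to the relation linking them.

kb = {
    ("Language", "Phoneme"): "smallest-unit",
    ("Music", "Interval"): "smallest-unit",
    ("Music", "Timbre"): "sound-quality",
}

def solve_proportion(a, b, c):
    """Answer a : b :: c : ? by re-using the relation linking a to b."""
    relation = kb[(a, b)]
    return [y for (x, y), r in kb.items() if x == c and r == relation]

answer = solve_proportion("Language", "Phoneme", "Music")
```

Notice the hard part is hidden in the KB: someone had to decide that "smallest-unit" is the relation worth storing.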

iii) ?
Circle is like a sphere
Recursion is like iteration
Abstraction is like simplification/...

Music is like poetry
Music (as sequences) are like math sequences

iv) Relate to situation -- abstract out the salient essence of this situation,
and use this to describe...

Like a Fiddler on the Roof -- like day to day existence
	- trying to do delicate thing, in precarious situation.

Feedback

Simulation - occurs in computer, logic (cf Steve example from Kechris)
	showing NP-completeness

To get from A to B (eg when solving a problem)
taking the circuitous route 1,2,3, rather than the "obvious" 4
	   2
	↑ -→- ↓
      1	|     | 3
	.     .
	A     B
  Instances: 1 = Meta, Problem Reformulation, Simulation,
	finding mapping between pairs of theories going thru some model

-----
Anyway, any model of analogy should be able to cover (that is, be able
to explain) such examples as these.  The fact that so many disciplines
utilize such a process leads us to view this process as very important,
and underlying much of our cognitive activity.  We hope the method proposed
in this thesis will be worthy of ...
Things belonging in the Meta-KB

This particular example indicates the sort of facts which we now
feel will be useful for this type of analogizing:
@BEGIN[ENUMERATE]
a large vocabulary of relations between relations -- such as the
TransitiveClosure function used above, or things like Inverse,
Composition, Plussing, ...  
(This could go for several layers,
to define commonality between starring and plussing...)

criteria for evaluating "complexity" -- ie why is it more relevant
to have relations which involve corresponding spaces
than merely the same number of relations, or relations which match
only in arity.
@FOOT{There are times when the "exact match of arity" is inappropriate:
for example, one may want the binary ON@-(2)(x y) relation in one language
to match the ternary ON@-(3)(x y situation) relation in another.}
@END[ENUMERATE]
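The "relations between relations" vocabulary listed above can be made concrete by treating a relation as a set of pairs; TransitiveClosure, Inverse and Composition then become ordinary higher-order operations.  A sketch (the genealogical facts are illustrative):

```python
# Sketch of meta-KB vocabulary items: with a relation represented as a set
# of (x, y) pairs, Inverse, Composition and TransitiveClosure are simply
# functions that map relations to relations.

def inverse(rel):
    return {(y, x) for (x, y) in rel}

def compose(r, s):
    return {(x, z) for (x, y) in r for (y2, z) in s if y == y2}

def transitive_closure(rel):
    closure = set(rel)
    while True:
        larger = closure | compose(closure, closure)
        if larger == closure:          # fixed point reached
            return closure
        closure = larger

# Illustrative use: Ancestor as the transitive closure of Parent.
parent = {("Tom", "Bob"), ("Bob", "Ann")}
ancestor = transitive_closure(parent)
```

A second layer of this vocabulary (eg what "Plussing" and "starring" have in common) would relate these operators to each other in the same style.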

	Examples of Abstraction
Abstractions:
	Hierarchy, DAG, Directed Graph, Graph, Extended Graph (w/n-ary rel'ns)
			  \→ Weighted, Directed Graph

Note that this abstracting idea is not something to worry about --
even our perception is not "unbiased" and pure, but in fact has undergone
much processing which throws away "noise" and other ...
@Section(Basis for a Good Analogy)

Basically, the formal definition given for analogy intentionally 
says nothing about how to rank competing analogies,
or about how to generate apt analogies.
The criteria for appraising the accuracy of an analogy
are by no means universal --
rather they are intrinsically subjective and imprecise.
With this in mind,
we propose a filtering process
based on a *user-modifiable* set of heuristics.
These will be used both to differentiate apt from "trivial" analogies,
and to prune inappropriate analogies during the generation stage.

The real meat of the paper will follow this lengthy "introduction".
Here we will describe how we plan to actually build
a computer program capable of using analogies --
more precisely, able to
generate and use relevant analogies between a pair of models,
for the purpose of communicating.
This program will have two large, almost independent pieces.
One is the actual analogizing system.
This initial program will (necessarily) embody the author's particular prejudices.
The purpose of the second part is to modify this analogizer,
by facilitating the input of different heuristics from other users.
It is this modifiability which distinguishes this system from
most other (AI) analogy systems,
and adds respectability to the overall program(me).
[[These new heuristics will modify the analogizer itself,
  honing it to generate better (or at least different) analogies.]]
Hence any user can enter his biases, and see the system produce analogies
he considers reasonable.
(Safeguards will be present to restrict the rules the user may enter,
to ensure the analogies generated fit
within the liberal definition we give for an acceptable analogy.)

This definition is sufficiently general to cover a wide
range of "understandable" (if silly) analogies, both good and bad.
	?. Abstraction - description

	<<work on>>
Analogy is an intrinsically semantic process -- and all the tools we have,
in a computer system, are inherently syntactic.
In particular, all of the terms come pre-defined (ie no one knows
how to form new Terms, ...)
[ie only matching ...]

<a long passage is lost here to character-set corruption in the source file;
the recoverable fragment reads "... this paper, we will regard analogy as a
shared abstraction ...", and the text resumes mid-example:>

... Peter breathes air, Paul breathes air, Mary breathes air, ... when the underlying idea
is that people breathe air, and that Peter, Paul and Mary are all people.
Given this, 
a simple inference procedure can now answer any
standard question about substance-breathed.
We can similarly attach facts about location-of-heart, or number-of-hands,
or approximate size,
to the single gestalt which houses facts about people in general,
and know that Peter, Paul and Mary will all "inherit" these facts.

These people-related assertions seem universal,
and, in some sense, guaranteed.  
Every person has two hands, for example.  Well, actually not.
The idea is that *most* people have exactly two hands --
by a sufficiently large margin that it will be
more economical to store that fact once,
and accept the additional cost associated with handling the exceptions.
There is a great simplicity associated with storing such almost-universal
facts -- which offers an economy of storage, and of retrieval-time inferencing,
when compared with other schemes (in particular, when compared with
storing all the facts at the leaves of the derived tree).
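The storage economy argued for here can be sketched with frames, assuming a single `isa` link up to the abstraction node; all the names are illustrative:

```python
# Sketch of storing almost-universal facts once, on an abstraction node,
# with exceptions recorded only on the individuals that need them.

typical_person = {"hands": 2, "breathes": "air", "height": "4-6 feet"}

def make_person(name, **exceptions):
    return {"name": name, "isa": typical_person, **exceptions}

def lookup(frame, attribute):
    """Answer from the individual if it overrides, else inherit."""
    if attribute in frame:
        return frame[attribute]
    return frame.get("isa", {}).get(attribute)

peter = make_person("Peter")
pat = make_person("Pat", hands=1)   # the exception costs one local fact

# Any standard question is now one fact plus inheritance, rather than a
# fact stored redundantly at every leaf.
```

The exception (Pat's one hand) costs exactly one local assertion, while everyone else inherits the default for free.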

This "TypicalPerson" node may be considered an ABSTRACTION of the idea
of any given person.  
Realize this TypicalPerson does NOT represent a person out in the real world.
Many of the essential, defining facts pertinent to any real person
will be left out (such as gender) or underspecified --
eg the height value here is simply 4-6 feet, rather than a precise value.
Also not every fact will be universal -- some people have only one hand,
despite TypicalPerson's implicit claim to the contrary.

We may think of an abstraction as representing standard, default information
about members of some class --
in the manner this TypicalPerson relates to the set of all people.
We understand that "usually" any such fact will be true about any
given member of that class.
Nothing is guaranteed, only suggested.

In this simple case, the notion of Person had already been defined.
Other abstractions may implicitly define a new category --
for example,
to deal with the usage of "bachelor" given in the example above,
we may want to talk about "male animals who have no current mate".
These extended-bachelors do indeed form a class, but not one which
any of us would, a priori, have considered.

The basic abstracting process, as these examples imply, is rather
simple:
it amounts to throwing away some of the details about an individual
(or perhaps an individual concept), to leave the necessary "essence" of 
that concept.
Hence TypicalPerson embodies only a subset of the facts true about Tom
(or any other given person), and the typical-extended-bachelor has
only gender (which is male) and basic classification (here animal)
in common with its deriving "bachelor = unmarried male" ancestor.
Other facts are blurred -- made less precise (or abstracted) --
size goes from an infinitely precise numeric value to a general range,
and the concept of "married" is extended to mean "mated with".
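A sketch of this throwing-away-and-blurring step, with illustrative attribute names; the `keep` and `blur` arguments stand in for the still-to-be-specified judgment of which details are essential:

```python
# Sketch of abstracting an individual: drop all but the "essential"
# attributes, and blur some surviving values into less precise ones.

def blur_height(inches):
    return "4-6 feet" if 48 <= inches <= 72 else "other"

def abstract(frame, keep, blur=None):
    """Keep only the named attributes, applying any blurring functions."""
    blur = blur or {}
    return {k: blur.get(k, lambda v: v)(frame[k]) for k in keep if k in frame}

tom = {"name": "Tom", "height": 70, "gpa": 3.2, "hands": 2}
typical = abstract(tom, keep={"height", "hands"}, blur={"height": blur_height})
# typical retains no name or GPA, and only a height range, not 70 inches
```

The operation itself is trivial; as the next sections stress, the hard part is choosing `keep` and `blur`.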

We will give more precise details about this process in the following
sections.  For now it is sufficient to regard an abstract of X 
as some intensional object (later, theory) which maintains the
"important" features of X, and throws away everything else.
	Multiple Abstractions

The scare quotes in the preceding paragraph were intentional.
It's not always clear which properties of some object are essential,
and which are simply chaff.
Worse, for different applications it is clear that a different
set of facts should be kept.
When viewing Tom as a student, his physical height is totally irrelevant
[* of course, one can always come up with mitigating circumstances
when a student's height is important -- such as in a @E class, or if
he's a student in Boot Camp, which does have height specifications for
all cadets.  But in general, these are pathological cases, which I
will refrain from throwing in, henceforth.]
while his GPA is an important specification;
on the other hand, actual physical height is of paramount consideration
when the physical object, Tom, is blocking the light.

	Where does this go -- talking about various uses of abstraction...
	UBIQUITY OF ABSTRACTIONS  (more motivation)

In the next section, when defining our model of analogy in terms of abstraction,
we will see
that the hard part in analogizing is figuring out the appropriate abstraction
to employ.
The rest of this section is devoted to providing further evidence about
the commonness, and usefulness, of abstraction.

Our raw, sensory perception does a good job of abstracting our
inputs, at many levels.
No one remembers EVERY detail of a conversation -- only the main thrusts.
Nor could anyone reconstruct every tissue, fiber and organ of the person
he just saw -- even though he might think he "knows" that person.
Of course, even given total recall, people do not observe every possible
measurement anyway.  Our auditory abilities are 
good only in certain frequency ranges, as is our visual sense.
Moving from rods and cones, neurophysiologists have found a tremendous
amount of processing which occurs along the optical nerve, long before
the initial "pixel level" signal reaches our brain --
in some sense both "weeding out" extraneous information, and encoding
the information in "higher units" -- in terms of edges or regions, rather
than points.
(Note this is another place our hardware has "pre-decided" -- these
higher level primitives might have been FFTs of edges, or in terms of
more complex patterns (see Julesz).
Q: Can people learn new types of primitives? - like with trick glasses?)
Hence our brain's input is, at best, but an abstraction of the real world --
we perceive a simplified, pre-packaged view of the external phenomena around
us.

This abstracting process can be in the form of a top-down guidance,
as well as this bottom-up "hardware" filtering.
As [Minsky] points out, the way we flesh out an image, or correct small
misunderstood sounds into words, uses a process which extracts only
certain characteristics and ignores others -- which smacks of abstraction.
This of course goes on at several levels - from phonemic (or region)
up through assumed 3-D objects (or sentences) to high level "understanding"
of the peripheral world.

--- more here ---
Common pedagogic idea: use a solid example 
-- as people abstract general ideas from specifics rather well;
and, apparently, have a hard time going the other way.

---

This idea of abstraction seems quite similar to Quine's idea of
ostension.  He claims that classes are defined by examining certain
members, and then "extrapolating" from these instances -- apparently
having filtered out the noise -- here, details particular to the 
individuals included in this sample.

Others have discussed object definition by extending  from individuals,
rather than "building up" from primitives.
To define the notion of "game", Wittgenstein collects a host of accepted
examples, then finds a "family resemblance" among them.
The single concept of game serves as an abstraction of each of these
diverse instances.
In "Primi...", Winograd also makes the point that much of people's inferencing
is devoted to categorizing individuals based on "prototypes", rather
than on primitives.
These prototypes, holding many layers of default information,
are abstractions gleaned from examination of individuals.
	IV. Tying ANALOGY to ABSTRACTION

Alright - 
Let's now return to analogy -- defined as a shared abstraction.
The overall analogizing process seems pretty simple:
To find if given objects (or events, or ...) X and Y are analogous,
compare the abstraction of X to the abstraction of Y.
If these abstractions match, the original objects are analogous.
Unfortunately, any object may have many distinct abstractions;
and while some may be pre-stored, it seems that most are simply generated
as needed, during this matching process.
(Indeed, people seem very good, and fast, at performing this generation.)
Now the problem is seen to be much more difficult:
How does one determine any common abstraction between two individuals,
and how are analogies rated --
ie why is it so obvious which analogies are good and meaningful, and which
are clearly bad?
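The loop just sketched can be written down directly; the abstraction generator below is deliberately naive (every subset of an object's facts), which makes vivid why heuristics are needed to prune the space:

```python
# Sketch of analogy-as-shared-abstraction: X and Y are analogous when some
# abstraction of X matches some abstraction of Y.  Objects are flattened
# to fact sets, and abstractions are simply subsets of those facts.

from itertools import combinations

def abstractions(facts):
    facts = sorted(facts)
    for r in range(len(facts), 0, -1):          # least abstract first
        for subset in combinations(facts, r):
            yield frozenset(subset)

def shared_abstraction(x_facts, y_facts):
    """Return a largest common abstraction, or None if none exists."""
    y_all = set(abstractions(y_facts))
    for a in abstractions(x_facts):
        if a in y_all:
            return a
    return None

circle = {"closed", "round", "2-D"}
sphere = {"closed", "round", "3-D"}
# "Circle is like a sphere" via the shared abstraction {closed, round}
```

The enumeration is exponential in the number of facts, so this brute-force version only dramatizes the search problem the heuristics must tame.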

The rest of this section will discuss three issues which arise when 
connecting analogy to abstraction.
First, this abstracting process is probed in more detail,
examining criteria like appropriateness -- in the context of trying to
find the abstraction to use for some analogy
(especially when used to generate...).
Then the actual matching process will be considered 
-- just how does one determine that two abstractions are indeed equal.
Given this overview we will sketch the basic algorithm we propose be
used for analogizing -- this will be considerably fleshed out in the next
few sections.

	Which abstraction
A given individual may have many different abstractions, each representing
some (distinct) specific perspective of that entity.
Each such abstraction is useful for deducing facts about that individual
in some situations.
For some applications it is appropriate to regard Peter as a physical object
(eg when we notice his body is blocking the light,)
while for others his student perspective is much more pertinent
(eg when determining his GPA).
Similarly a building may be a physical object, or a memory-filled object
(eg a Home.)

So given a pair of objects, how does one find the best common abstraction?
Well, this is clearly a heuristically guided search, but through what space?
And what are other factors to consider?

Given the usage of abstraction above, constructing an abstraction seems
a fairly trivial operation: simply removing some attribute.  Of course,
this does not guarantee the result will be at all useful.
<give an example>
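A hypothetical illustration of that trivial operation (the attributes here are invented): which attribute is removed determines whether the result is a useful perspective or a useless one.

```python
# Peter, as a small attribute table (invented for illustration).
peter = {"animate": True, "student": True, "gpa": 3.4, "height-cm": 180}

def drop(obj, attr):
    """Abstract by deleting one attribute -- the 'trivial' operation."""
    return {k: v for k, v in obj.items() if k != attr}

# Dropping "gpa" leaves a coherent physical-body view of Peter:
print(drop(peter, "gpa"))
# Dropping "height-cm" instead keeps the student view, but discards the
# very fact needed when reasoning about Peter blocking the light:
print(drop(peter, "height-cm"))
```

Both results are legal abstractions; only context tells us which removal was the important one.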
Apparently the decision of which fact to remove, or which relation to contract,
is important. 
This leads to the second question above -- what other considerations must
come into play?  
Depending on the context, the same pair of objects may fit into quite different
abstractions.
(Eg the purpose of this analogy -- whether it be to inform, or to
store, and to whom (perhaps between homunculi inside the brain) --
is quite important.)

	How to match abstractions
Technically, we insist that the two abstractions match exactly.
However (for efficiency?) we can relax this a little, and permit
some short distances to be traversed -- ie a final close fitting.

Change of variables is allowed.
What if one is an extension of the other?  Then use the ...
Perhaps the corresponding relations are not equal -- one may take
more arguments, for example.  Then consider the (slightly non-standard)
operation of restricting the more precise operator, to ...
...
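One plausible reading of "change of variables is allowed" is a matcher that unifies two relational skeletons under a one-to-one renaming. A sketch, with the differing-arity (restriction) case merely flagged rather than handled:

```python
def match(pat, fact, bnd=None):
    """Match two relational skeletons up to a change of variables.
    Variables start with '?'; each must bind consistently, and two
    variables may not share one value (the renaming is one-to-one)."""
    bnd = dict(bnd or {})
    if isinstance(pat, tuple):
        if not (isinstance(fact, tuple) and len(pat) == len(fact)):
            return None   # differing arity: the more precise relation
                          # would first have to be restricted
        for p, f in zip(pat, fact):
            out = match(p, f, bnd)
            if out is None:
                return None
            bnd = out
        return bnd
    if isinstance(pat, str) and pat.startswith("?"):
        if pat in bnd:
            return bnd if bnd[pat] == fact else None
        if fact in bnd.values():
            return None   # keep the renaming one-to-one
        bnd[pat] = fact
        return bnd
    return bnd if pat == fact else None

# One abstract Flow relation matches a concrete plumbing fact:
print(match(("Flow", "?a", "?b", "?s"), ("Flow", "pump", "drain", "water")))
```

The relation names and tuple format are assumptions for the sketch; nothing here decides *which* abstraction to match, only whether two given ones do.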

	Basic process
Find commonalities of objects -- and use this to expand this common set into
a coherent abstraction -- ie add in features which must accommodate those
already present (relaxation).
This process may well be: develop along one dimension, then (perhaps)
retrench, when probing along another.
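A sketch of that develop-then-retrench loop, with "coherence" stubbed out as a toy pairwise-compatibility test. All feature names and the clash table are invented for illustration:

```python
def grow_abstraction(x, y, coherent):
    """Seed with the directly shared features, then grow the set greedily,
    retrenching (dropping the newest feature) whenever an addition
    breaks coherence."""
    common = sorted(set(x) & set(y))      # sorted: a fixed probing order
    kept = []
    for feat in common:
        kept.append(feat)
        if not coherent(kept):
            kept.pop()                    # retrench along this dimension
    return kept

# Toy coherence test: "discrete" and "liquid" may not co-occur.
clash = {("discrete", "liquid")}
ok = lambda fs: not any(set(pair) <= set(fs) for pair in clash)

print(grow_abstraction({"flows", "conserved", "discrete", "liquid"},
                       {"flows", "conserved", "discrete", "liquid", "wet"},
                       ok))              # ['conserved', 'discrete', 'flows']
```

The interesting behaviour is the pop: "liquid" is shared, but admitting it would clash with the already-kept "discrete", so the growing abstraction backs off.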

What about existing, pre-stored abstractions?
These are useful to determine which facets go together -- in any other way?
Yes: as first guesses for ways to expand the feature set.

---- Other things to be said ---
*Types of analogy 
I assume (this is subject to verification) that the "goodness" of
the `X is analogous to Y' will depend on the A which is the abstraction
common to both X and Y.
(Note this A refers to the "real" abstraction, which (sigh) need not
be the result my program will return.)
There are various criteria which can be used to judge analogies, or
abstractions.

Good - clear cut and sharp
	(note: obvious ones are not interesting)
Bad - Strained -- the path was long, and unmotivated
	obvious

Necessary -- ie there really is some common underlying phenomenon which
	forced this connection -- not just serendipity

*Ways of describing analogies
	Both are 4 legged creatures w/cold noses VS both are dogs.
	Note the second subsumes the first - and is therefore better.
	  [ie are deducible]

*What can be analogous - "syntactically"
Prototypes to Prototypes
Individuals to Prototypes
Individuals to Individuals

What about Relations to Relations?
 or are these just Individuals (like the constants)
	How about in the context of some other match...

	VI. Other ANALOGY work

	ubiquity
As the comments in the initial disclaimer implied, analogies seem
ubiquitous.
NL - see Lakoff's book -
 also TW's stuff, with Bachelor bear.

...
Analogies are inherently communicative -- ie from one source to another,
at a high baud rate.  Note both sender and receiver may be internal.

Analogies occur at various depths.
Some are quite superficial -- for these a rudimentary feature matcher
is amply adequate.  In other cases, the connection is considerably deeper.
Consider the connection joining an organization to trees.
Here we need to notice that
	Branch (of tree)
corresponds to
	BossOf (of corporation).
Why?  Well, the reason deals with the fact that
both map from X onto Xs, and that their respective inverses
are only X to X (ie only onto one).
So far this only describes a loose multi-hierarchy -- what prevents
circles? or defines a unique topmost node?

Well - the latter is straight-forward:
	Note that President is a special employee - who, by definition,
has no Boss.  Similarly Trunk has an isomorphic condition, with respect
to sprouts.  Now the question is how to find this, fast.

The answer may really be special hardware -- which accommodates this
"spreading activation" type of search.  Or we may consider definitional
facts: eg Employee is DEFINED based on this type of employed relation --
which forces an immediate employer.  What of Tree?  Well, the branch is defined
from the point of separation (ie of its branching).  So here we have something
special about the branches-from slot.  So maybe this does work...
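The Branch/BossOf observation -- each maps one X onto several Xs while its inverse is functional, with a unique topmost node -- can be checked mechanically. A sketch, over invented relation instances:

```python
def hierarchy_shape(pairs):
    """Check the shared abstraction of BossOf and Branch: the inverse
    relation is functional (each node has at most one parent), and
    report the root node(s).  A cycle shows up as an empty root set."""
    parent_of = {}
    for p, c in pairs:
        if c in parent_of:        # a second boss / branch point: not tree-like
            return False, set()
        parent_of[c] = p
    nodes = {n for pc in pairs for n in pc}
    roots = {n for n in nodes if n not in parent_of}
    return True, roots

boss_of  = [("pres", "vp1"), ("pres", "vp2"), ("vp1", "clerk")]
branches = [("trunk", "limb1"), ("trunk", "limb2"), ("limb1", "twig")]
print(hierarchy_shape(boss_of))    # (True, {'pres'})
print(hierarchy_shape(branches))   # (True, {'trunk'})
```

Both relations pass the same shape test, and the unique root falls out as President in one case and Trunk in the other -- the isomorphic "no Boss" / "no branching point" condition the text notes.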

	other systems
Common abstractions -- found because ...
Other systems can be explained in this model: things like simple feature
matching were successful because any set of common features points to
something in common, superficial as it may be.
Sets of features worked better -- people, at least (for economy of "thought
space"), find prototypes a good mechanism to follow.
Still perhaps at a superficial level.

Going a bit deeper: it is unlikely two items
would share some relation amongst n-tuples of their respective sub-parts just
by chance -- ie "chances are" there is some underlying reason for this.
PF: more variables ...

Extend this a bit farther, towards underlying causal models -- which, at this
abstraction, do match...  Here not just relations, but perhaps 2nd order facts,
about relations among relations, are important.
The problem is we have to find some SYNTACTIC method -- dealing only with
the particular set of features/particular decomposition proposed --
which "instantiates" an inherently SEMANTIC property joining a pair of
entities, which is that property of analogy.  These earlier methods
looked good, but hedged around the real issue -- of what is the actual
analogy.
	VII. Goals of this system

So much for overhead.  
While it did present arguments indicating
why analogy is a good, ripe area to study, the introduction
never mentioned just what analogy was -- ie what behaviour an
"Analogy Machine" must be able to exhibit.

Below I list the sort of operations which such a device must be
capable of performing.
All of them fit into the scheme portrayed below:
The Analogizer begins with a wealth of pre-existing knowledge, which
is used for all the queries.
The user enters some inquiry, and possibly provides some context as well
-- ie a statement of the purpose of this question.
(Note: there may be other ways of getting that context information...)
[Further note - this information serves to constrain the possible
abstraction/analogy generators...]
<<<Explaining to Z, talking to Z, storing away (ie what is best retrieval
index), poetic vs illustrative, causal vs serendipity>>>
The Analogizer then computes an answer, which is both returned to the user,
and stored as a new part of the knowledge base.

	    → - - - - - →    Context
	   ↑			|
USER - - → |			|
	   |			↓
	   ↓		|---------------|
			|		|
	Inquiry	 --->	|   Analogizer	|  --->  Answer
			|		|	   .
			|---------------|	   .
				↑		   ∨
				|		   .
				|		   .
			     Knowledge		   ↓
			       Base     ← - - - - ←

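The figure's control flow might be sketched as follows. The heuristic interface, the inquiry format, and the KB-as-fact-list are all assumptions made for this illustration:

```python
def analogizer(inquiry, context, kb, heuristics):
    """Top-level loop of the figure: take an inquiry plus optional
    context, consult the Knowledge Base via a set of heuristics, then
    both return the answer and store it back into the KB."""
    for h in heuristics:
        answer = h(inquiry, context, kb)
        if answer is not None:
            kb.append(answer)        # the answer becomes new knowledge
            return answer
    return None

# A toy heuristic: answer "X is like ?" from already-stored is-like facts.
def lookup(inquiry, context, kb):
    x = inquiry.get("X")
    for fact in kb:
        if fact[0] == "is-like" and fact[1] == x:
            return ("is-like", x, fact[2])
    return None

kb = [("is-like", "electricity", "water-flow")]
print(analogizer({"X": "electricity"}, None, kb, [lookup]))
```

Real heuristics would of course have to *generate* analogies, not just retrieve them; the point of the sketch is only the flow: Inquiry (+ Context) in, KB consulted, Answer out and stored.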

Let's now consider the types of question, and expected actions and responses.

I. Analogy Finding (or Analogy Determination)

This takes a general question like
	X is like a ?,
together with a statement of the purpose (eg, to explain X to person Z,
in terms of things Z might understand),
and returns the statement ? = Y.

As a side effect, the fact that ... [garbled]
... Subsequent inquiries ...

II. Analogy Rationalization: Why "X is Like Y" -- ie
the answer will ... [garbled; in this task the purpose is optional,]

---The ones below are clearly subtasks of the first two primary objectives,
above.---  However, they may be used as stand alone procedures, just in case.

III. Analogy Ranking
This determines whether X is more like Y1 than like Y2, or not (in context C).
Perhaps it could simply give a numeric "goodness" weight to each of the
"X is like Y1 (in A1)" and "X is like Y2 (in A2)" analogies.
(Note such a value might be negative, if something is a horrid analogy.)

IV. Analogy Incorporation
If the proposed "X is like Y (in A), because ..." analogy is acceptable,
that fact is stored in the Knowledge Base.  (Here the user is told "OK".)
Otherwise the user is told why this analogy was unacceptable.

	(Goals, con't)

Now let's consider the internals of that Analogizer box.
Clearly it must have an Abstracter box, which may serve a variety of
functions.
First, given X and a purpose P (should we be using Goal here? or context?),
it suggests an abstraction A of X.
This does not solve the whole problem of analogy,
as it is not clear how to find the apt
value of A which serves to analogize X to some Y.

Another consideration: people often get wedged -- and only consider certain
dimensions of abstractions.  (Ie a hammer is used to hit nails,
and is not considered as a wooden (and therefore burnable) object...)
(Claim: this is because people often have a set of pre-defined abstractions
(which pertain either to specific objects (eg to hammers),
or to certain general classes (ie to tools in general)),
and people go thru these first -- in fact, the presence of these tends to
inhibit people from thinking about other perspectives/abstractions, which
makes their problem-solution task more difficult.)

Of course, there is no reason to limit our program to this
deficiency -- however, there is some economy in
(i) assuming the perspective which worked once will still work later,
(ii) stopping the search for new abstractions after the first few attempts
have failed.
-- we'll see!

This abstracter should be able to tell how good (ie how appropriate)
some abstraction is -- guided by a set of heuristics.
<<<I think the incorporation of these heuristics is the very heart
of this overall project.  An initial such set is included later in this
proposal.>>>

--- Subtasks included
Locate all pre-defined abstractions of X.
(This eliminates some props, and sees if that abstraction is already in the KB.)
Rate "goodness" of A as an abstraction of X.

Given X and an abstraction of X, A1.  Now find the abstraction of Y along
this same set of axes.  How to represent the abstraction relation, to
permit ...

The bulk of the research effort will be devoted to the task
of determining what makes some analogies appropriate and others laughable.
There obviously can be no "algorithmic", or "universal", method
for deciding this issue.

Reflecting this idea, the final program will have two parts:
One part will use a set of pre-entered heuristics to generate analogies.
(These will, necessarily, correspond to certain prejudices on the part of the
author(s).)
The second, separate subsystem is designed to change this corpus of heuristics.
It will facilitate incorporating new "analogizing and abstracting heuristics"
proffered by subsequent users, within the guideline that applying these rules
results in an acceptable analogy.
{ftnote: Eventually, perhaps, another part of this system may be the source of
these heuristics --
driven by empirical observation, or past successes, or modifications of other
almost-successful heuristics.}
It is this capability which will ensure the long term viability
of this program.
Method for :
[For the time being, we will couch everything in RLL-y notation -- discussing
units and slots, and ...  Later we will expand, if necessary.]
Why is X like Y?
	Find BEST abstraction, A, above X and Y:
  Consider RELEVANT features of X (resp Y)
	Definitional ones are best
	[Note intricately defined ones likely to be useful...]
	Only SEMANTIC ones -- ie "Both entered by Lenat" is, at best, weak.
		only acceptable as last guess
For prototypes:
For matches - look at set of allowed slots  of typical member
	Identity is best possible!
	Same RangeType - eg List of People
	Same Format - ie same arity (well, extension of it)
	isomorphic rangetype -- eg both are list of same type as this unit.

	Better if essential -- definitional as opposed to merely assertional
	 [note how good Isa is!]
	Then see if all members happen to have this
	Then usually, finally default.
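These rankings can be written down directly. A sketch -- the slot descriptions, rank names, and dictionary format are invented for illustration, and the isomorphic-rangetype test is omitted:

```python
# Ranked per the text: identity is best, then same RangeType, then same
# Format (arity); definitional slots outrank ones that all members merely
# happen to have, and defaults come last.
MATCH_RANK  = ["identity", "same-rangetype", "same-format"]
SOURCE_RANK = ["definitional", "all-members", "default"]

def score(slot_a, slot_b):
    """Score a proposed slot-to-slot match: smaller tuples are better;
    None means no acceptable match at all."""
    if slot_a["name"] == slot_b["name"]:
        how = "identity"
    elif slot_a["rangetype"] == slot_b["rangetype"]:
        how = "same-rangetype"
    elif slot_a["arity"] == slot_b["arity"]:
        how = "same-format"
    else:
        return None
    return (MATCH_RANK.index(how),
            max(SOURCE_RANK.index(slot_a["source"]),
                SOURCE_RANK.index(slot_b["source"])))

isa_a  = {"name": "Isa", "rangetype": "Class", "arity": 1, "source": "definitional"}
isa_b  = {"name": "Isa", "rangetype": "Class", "arity": 1, "source": "definitional"}
father = {"name": "FatherOf", "rangetype": "Person", "arity": 1, "source": "all-members"}
mother = {"name": "MotherOf", "rangetype": "Person", "arity": 1, "source": "default"}

print(score(isa_a, isa_b))     # (0, 0) -- identity on a definitional slot
print(score(father, mother))   # (1, 2) -- same RangeType, but one is a default
```

The tuple ordering makes "note how good Isa is!" literal: an identical, definitional slot scores (0, 0), the best value possible.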

	Conclusion

It is eminently clear that no one, today, really understands analogy --
this author included.
The underlying purpose of this full research effort is to construct
a first pass at a partial answer to such questions and issues --
trying to define just what analogies are, and how they can be used.
As the work reported in this paper is still very much in progress,
much of it will be unclear.  Hopefully the questions addressed will,
if nothing else, stir others to make a next step, ...

Future directions
Psychological testing - see if this yields the results this model would predict
and generate (by timing...)

	Glossary

Talk with TW, and ask people like Lindley and Tom ...

Abstraction
	Really means "is a partial theory of" -- eg some collection of
(logical) sentences, ws, is an abstraction of some model M, if M
satisfies ws.
	Note this says nothing of the completeness of ws.
	Q: Why "abstraction" rather than "theory"?
	A: Because there may be some thorough, absolute theory of M, call
it T, and this ws is really just a sub-theory of that T.

Approximation
	Using a simpler (if erroneous) model,
for computational ease.  The assumption is that the results
(this model predicts) are "close" to the real phenomena.

Simplification
	(Like approximation.)  Only here there is an additional constraint:
that the simpler model not only lead to computational efficiency,
but also that it correspond "nicely" to the original.
[IE that there be corresponding terms, usually.]

Extension
	(Standard model theoretic concept -- one model extends another ...)

Analogy
	Two models are analogous if they share a common abstraction --
ie if there is some theory they each satisfy.

Reformulation
	... something about different terms, which collectively
convey as much information as the original.

Noise (Spurious, Erroneous Detail)
Needless (Superfluous) Detail

Sloppy, informal, inexact, imprecise

	Mike comments:
[Garbled beyond recovery in the source file.]

	Bibliography
[first entries garbled] ... & Hayes-Roth, B
Winston
Gentner
Tapple - Thesis proposal (where he defined all sorts of terms)
TW - Primitives, Prototypes
Quine - on Ostension
Wittgenstein - "Games", family resemblance

Darden - Personal communication
Minsky - frames


	Outtakes
eg, in the context of Shakespearean plays,
the nearest play to "Romeo & Juliet" might be ?,
whereas in terms of overall story plot, "West Side Story"
is an obvious selection.)

interaction ... all we have is a starting theory.  Can the analogizer
ask questions to attempt to flesh out the facts needed to complete a match?
A: Sure... can be written into a heuristic.

There is nothing in the analogizer which has that screening function.  
However, there are heuristics which
guide the probing; and these provide the information...

by mapping the interpretation of a symbol in one model onto its interpretation
in the other.

----
So now the problem is set up:
The INQUIRY mentioned in Figure 1 will be two of
<Object1, Object2, PartialTheory>,
and the Answer will be that omitted member of the triple.

(It does get a bit confusing when you consider that both Objects will be
represented as theories as well, but oh well...)

Now to address that context mess.  This collection is currently a
bag of miscellaneous conditions which serve to guide this particular,
individual analogy.  They will be the same type of heuristics which
are present in the heuristics KB, which we'll now discuss.

	Cute, but unused example

There are other mechanisms: for example, some relation may be generalized.
A simple example would be going from EQUAL to GreaterThan-or-Equal, or from
(the predicate defining the class) NaturalNumber to (the predicate for)
Integer.  [EG an abstraction of PrimeNumber might be (the theory of) integers
which have under three prime divisors -- or even the class of objects which are
assembled using 2 components, built using a single operator.
Other instances of that latter theory (in addition to primes) would be 
recipes which require mixing but two ingredients,
or conjuncts containing but two atomic clauses.]

The other formalism, which works by (essentially) "throwing away" some properties,
can readily deal with abstractions formed using that first mechanism.
It is difficult to see how it would handle cases where some predicate
is weakened, or any of the other methods of abstracting.
(Ie what properties do PrimeNumbers and Binary-Ingrediated Recipes share?
Certainly NOT the NumberOfFactors property, nor the ingredient#1 slot.)

We might say
(All x. (Member x PrimeNumber)   => ((Cardinality (Factors x)) = 2))
  matches
(All x. (Member x BinaryRecipes) => ((Cardinality (ingredients x)) = 2))
knowing that both Factors and ingredients are functions which return
(something like) the set of constituent parts of their argument.
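That match can be sketched given exactly one piece of extra background knowledge -- a table recording that Factors and ingredients both denote a constituent-parts function. The tuple format and role names are invented for illustration:

```python
# Each statement abbreviated as: (class, parts_function, cardinality),
# standing for (All x in class) |parts_function(x)| = cardinality.
prime  = ("PrimeNumber",   "Factors",     2)
recipe = ("BinaryRecipes", "ingredients", 2)

# The needed background fact: both functions return (something like)
# the set of constituent parts of their argument.
ROLE = {"Factors": "constituent-parts", "ingredients": "constituent-parts"}

def statements_match(s, t):
    """The statements match when the part functions play the same role
    and the asserted cardinalities agree."""
    return ROLE.get(s[1]) == ROLE.get(t[1]) and s[2] == t[2]

print(statements_match(prime, recipe))   # True
```

The ROLE table is doing all the semantic work here -- which is the point: without it, NumberOfFactors and ingredient#1 share nothing syntactically.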

The vagueness of these terms -- "enough", "properties" and "match" --
supplies the flexibility needed to span a number of types of analogy.
This superficial notion implies that determining an analogy involves
simply comparing the obvious features/properties of the two proposed analogues,
and declaring them analogous if this partial match exceeds some threshold.

----
Having laid the ground work with that brief approximations above,
we can now be a bit more formal in our definitions.

@Subsection(Notation, and Definition)
As noted in the above disclaimer,
analogy is a poorly understood (and hence overused) term.
One reason is that analogies can serve several, closely related purposes,
some of which are enumerated below.
An analogy is a relation which holds between a pair of things --
eg between two objects, events, situations, or whatever.
The analogy itself is described as a mapping between subparts of these two
objects.

(We will use a slightly different definition of analogy, using
terminology borrowed from model-theoretic logic.
Two things can be analogous only if they are both models of the
same partial theory.
We will elaborate this point in the next section.)

	Basic purpose, and explanation (for why it works)

Analogies can serve either a linguistic or a representational function.
We will first consider the linguistic function, in which
a speaker uses an analogy to communicate
a bundle of facts quickly to the hearer.
They thus facilitate the communication of complex concepts, permitting
this to be done rapidly and efficiently.
For example, the simple statement that "electricity resembles fluid (flow)"
really encodes a great deal of information about electricity (and possibly,
going the other way, about fluid flow as well),
which helps the hearer to "understand" electricity.
(This information may be used to solve electric problems.)

A representational use of analogy is where
some gestalt is stored in terms of an analogy.
This usage seems but a subcase of the linguistic one.
We chose to regard this case as a communication,
not between a pair of people, but rather between
two homunculi (or computer-culi)
communicating -- where the first is "memory" and the second is the voice.

For this approach to work, the hearer of the statement
must do a great deal of inferencing --
he must first "look up" his corpus of facts about fluid flow,
then decide which properties should carry over to the domain of electricity,
and finally he must map these fluid-related features to corresponding features
for electricity.

It is quite surprising how well people do at each of these steps.
The first and third seem, epistemologically, quite straightforward.
The second, however, is next to unfathomable.  
Why is it obvious that electricity is not wet, but that its quantity should,
like the fluids, be conserved across a junction?
<another example>
This issue, of deciding which parts to place in correspondence,
is the crux of intelligent analogizing.
Most of this research effort will be devoted to
developing heuristics which account, first, for standard "human" behaviour,
and then for reasonable behaviour - based on the economy of interface criteria
to be discussed below.

<<DIGRESSION>>
There can certainly be analogy-like behaviour between a pair of machines,
as well as a pair of people.
The only requirement would be the same basic purpose -- where some few
bits are transmitted to convey a much larger concept, based on the
assumption that the recipient
will be able to infer the needed additional facts (thereby fleshing out the
model by finding the corresponding mappings.)
<<Note: here using an abstraction of the idea of analogy.>>

-----
Here, from SCENAR
<<Ignore this page.  I think the approach given on the next page is more credible>>

KA: I know/infer that M-F is a cursor positioning command,
and that forward implies the cursor position is incremented.
By how much?

U: Distance(Cursor-Position End(cur-word)).
<<<NO NO NO -- really goes to a character after the NEXT word, if NOT in a word>>>

KA: So M-F means
    Cursor-Position := Cursor-Position + Distance(Cursor-Position End(cur-word)).
		(unless at boundary).
Correct?

U: Yes.

KA: C-B means
    Cursor-Position := Cursor-Position + -1.
Shall I assume M-B means
    Cursor-Position := Cursor-Position + Distance(Cursor-Position End(cur-word)).
		(unless at boundary)?

U: No -- use Start(cur-word), not End(cur-word).
[Note the quantity added will be negative.]

KA: So M-B means
    Cursor-Position := Cursor-Position + Distance(Cursor-Position Start(cur-word)).
		(unless at boundary),
correct?

U: Yep.

KA: Now onto the text modification commands, M-D and M-<rubout>.  
I have C-D meaning
	DeleteChars(Cursor-Position 1).
If I assume that forward deletion acts like forward movement,
I get M-D meaning
	DeleteChars(Cursor-Position Distance(Cursor-Position End(cur-word))).
Is this correct?

U: Yes.

KA: Should I similarly infer that M-<rubout> means
	DeleteChars(Cursor-Position Distance(Cursor-Position Start(cur-word))).

U: Yes.

KA: Got the forms.  Now explain the cur-word variable, and what the
Start and End functions do.

U: cur-word is the current word, analogous to "current character".

KA: Ok. It's easy to see how the cursor's position determines the current character
-- as it points to a single one.
What does it mean for the cursor to "point to a word"?

U: cur-word is the word which contains the current character.

KA: Ok.

U: What is Start?

KA: Start(word) is the position of the first letter of the string of characters
which form this word.  End(word) is the first character AFTER the end of the word.
<<<NO NO NO - if in word then it refers to the start of the word.
	If at space, then jumps to the PRIOR word.>>>
KA: Note that 1 = length of current single character.
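The command definitions the dialogue converges on can be collected into a runnable sketch. This is a toy buffer model (the word-boundary caveats flagged in the <<<NO NO NO>>> notes are deliberately ignored):

```python
import re

WORD = re.compile(r"\w+")

def end_of_word(text, pos):
    """Position just after the current (or next) word -- End(cur-word)."""
    m = WORD.search(text, pos)
    return m.end() if m else pos

def start_of_word(text, pos):
    """Start of the current (or prior) word -- Start(cur-word)."""
    starts = [m.start() for m in WORD.finditer(text) if m.start() < pos]
    return starts[-1] if starts else pos

def meta_f(text, pos):   # M-F: Cursor-Position += Distance(pos, End(cur-word))
    return end_of_word(text, pos)

def meta_b(text, pos):   # M-B: move to Start(cur-word) (a negative distance)
    return start_of_word(text, pos)

def meta_d(text, pos):   # M-D: DeleteChars(pos, Distance(pos, End(cur-word)))
    return text[:pos] + text[end_of_word(text, pos):], pos

buf = "move the cursor"
print(meta_f(buf, 0))       # 4  (just past "move")
print(meta_b(buf, 8))       # 5  (start of "the")
print(meta_d(buf, 5)[0])    # "move  cursor"  ("the" deleted)
```

M-<rubout> would be the symmetric deletion using start_of_word; the sketch shows how the analogy C-F : M-F :: C-D : M-D bottoms out in the same cursor arithmetic with End/Start substituted.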
@BEGIN(Multiple)
"Representation" (still unexplored)@*
@BEGIN(Itemize, Spread=0)
@b(Motto:) @i(Non-decomposition way of referring to A.)

@b(Scenario:)  A KB encodes the gestalt A by indicating that it is like B.
It possibly includes the known way(s) in which A and B are similar,
and how different.
(Ex: storing fact that electricity is like water flow.)

@b(Purpose:) Fast look up of B-like things, facilitates 1. and 2. above(?),
allow multiple representations of an object (see KRL perspective), ...

@b(Subcases:) (i) A satisfies some of the equations which B satisfies,
(ii) A exhibits similar behaviour, (iii) A is "known" to have corresponding
internal structure (at some level of abstraction -- see man-AI)...
(Use of prototypes is similar to this; see also use of models...)

@b(Preconditions:)  KB must have some corpus of gestalts, and pretty fancy
inferencing scheme for using this by-analogy-indexing scheme.

@b(Near miss:)  Using simple decomposition scheme, in which the features of
A and B (and their partial match) are explicit.
This is too obvious to be interesting (here), and is no longer gestalt matching.

@b(Examples:)
@END(Itemize)
@END(Multiple)